
Search in the Catalogues and Directories

Hits 1 – 14 of 14

1
Higher-order Derivatives of Weighted Finite-state Machines ...
BASE
2
On Finding the K-best Non-projective Dependency Trees ...
BASE
3
On Finding the K-best Non-projective Dependency Trees ...
Zmigrod, Ran; Vieira, Tim; Cotterell, Ryan. - ETH Zurich, 2021
BASE
4
Efficient computation of expectations under spanning tree distributions ...
Zmigrod, Ran; Vieira, Tim; Cotterell, Ryan. - ETH Zurich, 2021
BASE
5
Higher-order Derivatives of Weighted Finite-state Machines ...
Zmigrod, Ran; Vieira, Tim; Cotterell, Ryan. - ETH Zurich, 2021
BASE
6
On Finding the K-best Non-projective Dependency Trees
In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (2021)
BASE
7
Higher-order Derivatives of Weighted Finite-state Machines
In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (2021)
BASE
8
Efficient computation of expectations under spanning tree distributions
In: Transactions of the Association for Computational Linguistics, 9 (2021)
Abstract: We give a general framework for inference in spanning tree models. We propose unified algorithms for the important cases of first-order expectations and second-order expectations in edge-factored, non-projective spanning-tree models. Our algorithms exploit a fundamental connection between gradients and expectations, which allows us to derive efficient algorithms. These algorithms are easy to implement with or without automatic differentiation software. We motivate the development of our framework with several cautionary tales of previous research, which has developed numerous inefficient algorithms for computing expectations and their gradients. We demonstrate how our framework efficiently computes several quantities with known algorithms, including the expected attachment score, entropy, and generalized expectation criteria. As a bonus, we give algorithms for quantities that are missing in the literature, including the KL divergence. In all cases, our approach matches the efficiency of existing algorithms and, in several cases, reduces the runtime complexity by a factor of the sentence length. We validate the implementation of our framework through runtime experiments. We find our algorithms are up to 15 and 9 times faster than previous algorithms for computing the Shannon entropy and the gradient of the generalized expectation objective, respectively.
ISSN: 2307-387X
(A minimal code sketch of the gradient-expectation connection described in this abstract follows the result list.)
URL: https://doi.org/10.3929/ethz-b-000517632
https://hdl.handle.net/20.500.11850/517632
BASE
9
Efficient Sampling of Dependency Structure
In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (2021)
BASE
10
SIGMORPHON 2020 Shared Task 0: Typologically Diverse Morphological Inflection ...
BASE
11
Information-Theoretic Probing for Linguistic Structure ...
BASE
12
Information-Theoretic Probing for Linguistic Structure ...
BASE
13
Please Mind the Root: Decoding Arborescences for Dependency Parsing
In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) (2020)
BASE
14
Information-Theoretic Probing for Linguistic Structure
In: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (2020)
BASE
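Hit 8, "Efficient computation of expectations under spanning tree distributions", turns on the connection between gradients and expectations: in an edge-factored spanning-tree (arborescence) model, the log-partition function is a log-determinant of a graph Laplacian by the Matrix-Tree Theorem, and its gradient with respect to the edge log-weights yields the edge marginals (first-order expectations); second derivatives give second-order expectations. The sketch below illustrates that identity with JAX automatic differentiation. It is not the paper's implementation: the function name, the choice of node 0 as root, and the unconstrained (multi-root) variant of the Matrix-Tree Theorem are simplifying assumptions for illustration.

import jax
import jax.numpy as jnp

def log_partition(log_w):
    """log Z of an edge-factored arborescence model rooted at node 0.

    log_w: (n+1, n+1) matrix of edge log-weights; log_w[i, j] scores the
    edge from head i to dependent j.  The diagonal and column 0 (edges
    into the root) are ignored.
    """
    w = jnp.exp(log_w)
    n1 = w.shape[0]
    # Disallow self-loops and edges into the root.
    mask = (1.0 - jnp.eye(n1)).at[:, 0].set(0.0)
    w = w * mask
    # Laplacian: incoming-weight sums on the diagonal, -w off it.
    lap = jnp.diag(w.sum(axis=0)) - w
    # Matrix-Tree Theorem (Tutte): drop the root's row and column.
    return jnp.linalg.slogdet(lap[1:, 1:])[1]

# Gradient of log Z w.r.t. the edge log-weights = edge marginals
# (first-order expectations); jax.hessian would give second-order terms.
edge_marginals = jax.grad(log_partition)

log_w = jax.random.normal(jax.random.PRNGKey(0), (4, 4))  # root + 3 words
p = edge_marginals(log_w)
print(p.sum())  # expected edge count: every tree over 3 words has 3 edges

The abstract notes that the algorithms can be implemented with or without automatic differentiation; autodiff is used here only for brevity, since the same gradient can be written in closed form via the inverse Laplacian.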

Catalogues: 0
Bibliographies: 0
Linked Open Data catalogues: 0
Online resources: 0
Open access documents: 14